Goto



Socratic Mind: Impact of a Novel GenAI-Powered Assessment Tool on Student Learning and Higher-Order Thinking

Lee, Jeonghyun, Hung, Jui-Tse, Soylu, Meryem Yilmaz, Popescu, Diana, Cui, Christopher Zhang, Grigoryan, Gayane, Joyner, David A, Harmon, Stephen W

arXiv.org Artificial Intelligence

This study examines the impact of Socratic Mind, a Generative Artificial Intelligence (GenAI)-powered formative assessment tool that employs Socratic questioning to support student learning in a large, fully online undergraduate-level computing course. Employing a quasi-experimental, mixed-methods design, we investigated participants' engagement patterns, the influence of user experience on engagement, and impacts on both perceived and actual learning outcomes. Data were collected from system logs; surveys on user experience, perceived engagement, and learning gains; student reflections; and course performance data. Results indicated that participants consistently reported high levels of affective, behavioral, and cognitive engagement, and these were strongly linked to positive user experiences and perceived learning outcomes. Quantitative analysis further revealed that students who engaged with the GenAI tool experienced significant gains in their quiz scores compared to those who did not, with students with lower baseline achievement benefiting most. Additionally, thematic analysis of qualitative feedback revealed substantial perceived improvements in higher-order thinking skills, including problem solving, critical thinking, and self-reflection. Our findings highlight the promise of AI-mediated dialogue in fostering deeper engagement and higher-order cognitive skills. As higher education institutions expand GenAI integration across their curricula, this dialogic, GenAI-powered assessment tool can offer a scalable strategy to promote meaningful student learning outcomes.


Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework

Zhang, Xuanming, Chen, Yuxuan, Zheng, Yiming, Zhang, Zhexin, Yuan, Yuan, Huang, Minlie

arXiv.org Artificial Intelligence

In real-world software development, improper or missing exception handling can severely impact the robustness and reliability of code. Exception handling mechanisms require developers to detect, capture, and manage exceptions to a high standard, but many developers struggle with these tasks, leading to fragile code. This problem is particularly evident in open-source projects and impacts the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Block, and Distorted Handling Solution. These problems are widespread across real-world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi-agent framework inspired by expert developer strategies for exception handling. Seeker uses five agents (Scanner, Detector, Predator, Ranker, and Handler) to assist LLMs in detecting, capturing, and resolving exceptions more effectively. Our work is the first systematic study on leveraging LLMs to enhance exception handling practices in real development scenarios, providing valuable insights for future improvements in code reliability.
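The failure modes the abstract names are easiest to see in code. Here is a minimal Python sketch (illustrative only; Seeker itself is an LLM agent pipeline, and this is not from the paper) of the kind of fragile code such a framework would flag, next to a version that captures specific exception types and handles each one meaningfully:

```python
# Illustrative sketch of fragile vs. robust exception handling.
# Function names and the config format are hypothetical examples.

def read_config_fragile(path):
    # Fragile: no handling at all -- any I/O or parse error crashes the caller.
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)

def read_config_robust(path, default=None):
    # Robust: capture specific exception types, instead of a bare `except:`
    # that silently swallows everything (a "distorted handling solution").
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except FileNotFoundError:
        # A missing file is an expected condition here; fall back gracefully.
        return default if default is not None else {}
    except OSError as exc:
        # Other I/O failures are not recoverable; re-raise with context.
        raise RuntimeError(f"could not load config {path!r}") from exc
```

The point mirrored from the abstract: detecting *which* code is fragile, capturing the *right* exception types, and producing a handling strategy that neither crashes nor hides errors.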


Seeker: Enhancing Exception Handling in Code with LLM-based Multi-Agent Approach

Zhang, Xuanming, Chen, Yuxuan, Yuan, Yuan, Huang, Minlie

arXiv.org Artificial Intelligence

In real-world software development, improper or missing exception handling can severely impact the robustness and reliability of code. Exception handling mechanisms require developers to detect, capture, and manage exceptions to a high standard, but many developers struggle with these tasks, leading to fragile code. This problem is particularly evident in open-source projects and impacts the overall quality of the software ecosystem. To address this challenge, we explore the use of large language models (LLMs) to improve exception handling in code. Through extensive analysis, we identify three key issues: Insensitive Detection of Fragile Code, Inaccurate Capture of Exception Types, and Distorted Handling Solutions. These problems are widespread across real-world repositories, suggesting that robust exception handling practices are often overlooked or mishandled. In response, we propose Seeker, a multi-agent framework inspired by expert developer strategies for exception handling. Seeker uses five agents (Scanner, Detector, Predator, Ranker, and Handler) to assist LLMs in detecting, capturing, and resolving exceptions more effectively. Our work is the first systematic study on leveraging LLMs to enhance exception handling practices, providing valuable insights for future improvements in code reliability.


Can AI compensate for lack of employee know-how?

#artificialintelligence

A mechanic in an auto shop can run a computer diagnostics program on your vehicle to identify what's wrong, but he can't do the actual physical troubleshooting himself. A customer service rep can take you through a scripted checklist for troubleshooting a problem with your air conditioner, but after you've exhausted this checklist, you're both stumped. Meanwhile in IT, the crackerjack no-code developer writes an app and deploys it at record speed, but he's at a loss when the app uses more resources than it should and needs to be tuned. All are examples of how fundamental business processes, and the IT behind them, have become so abstracted away from the organic process of doing something that the employees charged with performing these tasks simply cannot do them when the predetermined recipe for task performance fails. During a visit I had in the semiconductor industry, one manager confided that he was deeply concerned that a new generation of materials engineers lacked the ability to "develop workarounds" when a particular metal needed for manufacture was in short supply. "In my day, we did this," he said.


Python programming Bible - From Beginner to Advanced

#artificialintelligence

The course starts with a basic Python introduction and the history of Python. It also answers the basic question of why we should learn Python when there are so many programming languages available on the market, and it delves into what can be done in Python as well as the areas where Python does not score very well. This module covers the details of installing Python via the Anaconda package and the steps of Python program execution. By the end, learners will be able to write their first "hello world" program in Python using the Jupyter editor and the Python shell.
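For readers who have never seen it, that first exercise amounts to a single line; the extra lines below are the classic next step of experimenting in the shell, not the course's actual materials:

```python
# The classic first program -- runnable in a Jupyter cell or the Python shell.
print("Hello, world!")

# The shell doubles as an interactive sandbox, which is usually the next step:
message = "Hello, " + "world!"   # string concatenation
print(len(message))              # prints 13
```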


A Must-Have Tool for Every Data Scientist

#artificialintelligence

Let's face it: training a machine learning model is time-consuming. Even with the advances in computing power over the past few years, training machine learning models takes a lot of time. Even fairly modest models have more than a million parameters. At the larger end, models have over a billion parameters (GPT-3 has over 175 billion!), and training them takes days, if not weeks. As data scientists, we want to keep an eye on the model's metrics to know whether it is performing as expected.
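The excerpt doesn't name the tool it goes on to recommend, but the underlying idea can be sketched without it. Below is a hypothetical, plain-Python stand-in with a simulated training loop; real experiment trackers (TensorBoard, Weights & Biases, MLflow, and the like) add live dashboards, persistence, and cross-run comparison on top of this same logging pattern:

```python
# Minimal sketch of per-epoch metric tracking during a (fake) training loop.
# The loop and the decaying loss are simulated stand-ins for a real model.
history = {"epoch": [], "loss": []}

def log_metrics(epoch, loss):
    # Record the metric and surface it immediately, so a long run can be
    # monitored (and stopped) without waiting days for it to finish.
    history["epoch"].append(epoch)
    history["loss"].append(loss)
    print(f"epoch {epoch:2d}  loss {loss:.4f}")

loss = 1.0
for epoch in range(5):
    loss *= 0.5          # simulated improvement each epoch
    log_metrics(epoch, loss)

best = min(history["loss"])
```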


11 Best C++ Tutorials for Beginners to Advanced 2020

#artificialintelligence

From these online C++ tutorials, you will learn to create your own applications that run on a wide variety of hardware platforms, such as personal computers running Windows, Linux, UNIX, or Mac OS X. Enrollment: 318,259 learners have already enrolled.


How Lyft, Walmart, and Philips are Using AI to Transform Their Businesses - AI Trends

#artificialintelligence

This article is a follow-up to my previous one on the rise of Artificial Intelligence in the Enterprise. In this article, I will talk about how enterprises in transportation, retail, and healthcare are transforming themselves using AI. The use cases range from transforming back-office applications to bringing compassion back into healthcare, detecting fraud, and building toward the future of autonomous cars. Although I talk about specific enterprises here, the use cases are fairly generic and horizontal. The fraud detection use case, for example, applies to a large number of verticals where financial transactions and/or user behavior monitoring are essential, including eCommerce, financial services, and retail environments.


Becoming a 10x Data Scientist - Algorithmia

@machinelearnbot

Recently I gave a talk at PyData Seattle about how to ramp up your data science skills by borrowing tips and tricks from the developer community. These suggestions will help you become a more proficient data scientist who is loved by your team members and stakeholders. Also, if you want to watch the original talk in its entirety, check it out here. A 10x developer is someone who is literally 10 times more productive than the average developer. A 10x developer not only produces more code per hour than the average developer; they debug like a boss, introduce fewer bugs because they test their code, mentor junior developers, write their own documentation, and have a lot of other broad skills that go beyond knowing how to code.


How To Become a 10x Data Scientist, part 2

@machinelearnbot

Being consistent with your code style is just as important as following naming conventions. To gain some basic style points, stick to the same case: don't mix camelCase and snake_case in the same script, because it quickly becomes hard to read and navigate your code. Another way to be consistent is to stick with the same method of accomplishing a task. For instance, if you need to remove duplicates from a list in a couple of spots in your code, don't get creative and use a different approach in each one just because you saw it on Stack Overflow.
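In Python, that "same method everywhere" advice can be made concrete by defining one helper and reusing it wherever the task comes up. The `dict.fromkeys` idiom below is one common order-preserving way to deduplicate (dict keys preserve insertion order in Python 3.7+); the function name and sample data are illustrative:

```python
# One canonical way to remove duplicates while preserving order, defined
# once and reused everywhere -- rather than set() in one spot and a
# hand-rolled loop in another.
def dedupe(items):
    """Return items with duplicates removed, keeping the first occurrence."""
    return list(dict.fromkeys(items))

user_ids = dedupe([3, 1, 3, 2, 1])    # [3, 1, 2]
tags = dedupe(["ml", "ai", "ml"])     # ["ml", "ai"]
```

Note that `set(items)` is shorter but does not preserve order, which is exactly the kind of subtle behavioral difference that creeps in when the same task is done different ways across a script.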